Results 1 - 20 of 66
1.
Diagnostics (Basel) ; 14(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732305

ABSTRACT

This study aims to evaluate the effectiveness of a deep learning approach for the automated detection of pulp stones in panoramic imaging. A comprehensive dataset comprising 2409 panoramic radiography images (7564 labels) was labeled using the CranioCatch labeling program, developed in Eskisehir, Turkey. The dataset was stratified into three distinct subsets: training (n = 1929, 80% of the total), validation (n = 240, 10% of the total), and test (n = 240, 10% of the total) sets. To optimize the visual clarity of labeled regions, a 3 × 3 CLAHE (contrast-limited adaptive histogram equalization) operation was applied to the images. The YOLOv5 architecture was employed for artificial intelligence modeling, yielding F1, sensitivity, and precision metrics of 0.7892, 0.8026, and 0.7762, respectively, on the test dataset. Deep learning-based artificial intelligence algorithms applied to panoramic radiographs thus achieved remarkable success in the detection of pulp stones. Model success rates are expected to increase as datasets comprising larger numbers of images become available for training. Artificial intelligence-supported clinical decision support system software has the potential to increase the efficiency and effectiveness of dentists.
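
For reference, a minimal sketch of the 3 × 3 CLAHE preprocessing step with OpenCV, assuming the abstract's "clash operation" denotes CLAHE with a 3 × 3 tile grid; the clip limit and file path are illustrative assumptions, not values stated in the study.

```python
import cv2

def enhance_radiograph(path: str):
    """Apply 3 x 3 CLAHE to a grayscale panoramic radiograph."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(3, 3))  # 3 x 3 grid
    return clahe.apply(gray)
```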

2.
Article in English | MEDLINE | ID: mdl-38632035

ABSTRACT

OBJECTIVE: The aim of this study is to assess the efficacy of a deep learning methodology for the automated identification and enumeration of permanent teeth in bitewing radiographs. STUDY DESIGN: A total of 1248 bitewing radiography images were annotated using the CranioCatch labeling program, developed in Eskisehir, Turkey. The dataset was partitioned into 3 subsets: training (n = 1000, 80% of the total), validation (n = 124, 10% of the total), and test (n = 124, 10% of the total) sets. The images were subjected to a 3 × 3 CLAHE (contrast-limited adaptive histogram equalization) operation to enhance the clarity of the labeled regions. RESULTS: The F1, sensitivity, and precision results of the artificial intelligence model obtained using the YOLOv5 architecture on the test dataset were 0.9913, 0.9954, and 0.9873, respectively. CONCLUSION: Deep learning-based artificial intelligence algorithms applied to bitewing radiographs demonstrated notable efficacy in the numerical identification of teeth. Clinical decision support system software augmented by artificial intelligence has the potential to enhance the efficiency and effectiveness of dental practitioners.
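
A minimal sketch of the 80/10/10 split described above; the seed is an illustrative assumption, and integer rounding may differ by a few items from the study's exact 1000/124/124 counts.

```python
import random

def split_80_10_10(paths, seed=42):
    """Shuffle file paths and split them into train/validation/test sets."""
    rng = random.Random(seed)
    paths = sorted(paths)
    rng.shuffle(paths)
    n_train = int(len(paths) * 0.8)
    n_val = int(len(paths) * 0.1)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])
```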

3.
BMC Oral Health ; 24(1): 490, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658959

ABSTRACT

BACKGROUND: A deep learning model trained on a large image dataset can be used to detect and discriminate targets with similar but not identical appearances. The aim of this study is to evaluate the post-training performance of the CNN-based YOLOv5x algorithm in detecting white spot lesions in post-orthodontic oral photographs using the limited data available, and to serve as a preliminary study for fully automated models that can be clinically integrated in the future. METHODS: A total of 435 images in JPG format were uploaded into the CranioCatch labeling software, and white spot lesions were labeled. The labeled images were resized to 640 × 320 while maintaining their aspect ratio before model training. The labeled images were randomly divided into three groups (training: 349 images (1589 labels), validation: 43 images (181 labels), test: 43 images (215 labels)). The YOLOv5x algorithm was used to perform deep learning. The segmentation performance of the tested model was visualized and analyzed using ROC analysis and a confusion matrix. True positive (TP), false positive (FP), and false negative (FN) values were determined. RESULTS: Among the test group images, there were 133 TPs, 36 FPs, and 82 FNs. The model's precision, recall, and F1 score for detecting white spot lesions were 0.786, 0.618, and 0.692, respectively. The AUC value obtained from the ROC analysis was 0.712. The mAP value obtained from the precision-recall curve was 0.425. CONCLUSIONS: The model's accuracy and sensitivity in detecting white spot lesions remained lower than expected for practical application, but represent a promising and acceptable detection rate compared with previous studies. The current study provides preliminary insight that can be further improved by enlarging the training dataset and modifying the deep learning algorithm. CLINICAL RELEVANCE: Deep learning systems can help clinicians distinguish white spot lesions that may be missed during visual inspection.
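
The reported metrics follow directly from the stated counts; a worked check:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Standard detection metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Test-set counts reported above: 133 TP, 36 FP, 82 FN
print(precision_recall_f1(133, 36, 82))
# ~(0.787, 0.619, 0.693), matching the reported 0.786/0.618/0.692 up to rounding
```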


Subject(s)
Algorithms , Deep Learning , Photography, Dental , Humans , Image Processing, Computer-Assisted/methods , Photography, Dental/methods , Pilot Projects
4.
J Stomatol Oral Maxillofac Surg ; : 101817, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38458545

ABSTRACT

OBJECTIVE: The aim of this study is to determine whether a deep learning (DL) model can predict the surgical difficulty of an impacted maxillary third molar from panoramic images before surgery. MATERIALS AND METHODS: The dataset consists of 708 panoramic radiographs of patients who presented to the Oral and Maxillofacial Surgery Clinic for various reasons. The difficulty of each maxillary third molar was scored based on depth (V), angulation (H), relation with the maxillary sinus (S), and relation with the ramus (R) on panoramic images. The YOLOv5x architecture was used to perform automatic segmentation and classification. To prevent images used in training from being re-tested, the dataset was subdivided into 80% training, 10% validation, and 10% test groups. RESULTS: The impacted upper third molar segmentation model showed the best performance, with sensitivity, precision, and F1 score of 0.9705, 0.9428, and 0.9565, respectively. The S-model had lower sensitivity, precision, and F1 score than the other models, at 0.8974, 0.6194, and 0.7329, respectively. CONCLUSION: The results showed that the proposed DL model could be effective for predicting the surgical difficulty of an impacted maxillary third molar from panoramic radiographs, and this approach might serve as a decision support mechanism for clinicians in the perisurgical period.
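
One way to enforce the stated no-re-testing constraint is to split at the patient level so no patient's images appear in more than one subset; the abstract does not say this was the mechanism used, so the sketch below is an assumption using scikit-learn's group-aware splitter.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def split_by_patient(image_ids, patient_ids, seed=0):
    """80/10/10 split that keeps all images of a patient in one subset."""
    image_ids = np.asarray(image_ids)
    patient_ids = np.asarray(patient_ids)
    outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train_idx, rest_idx = next(outer.split(image_ids, groups=patient_ids))
    inner = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=seed)
    val_rel, test_rel = next(inner.split(rest_idx, groups=patient_ids[rest_idx]))
    return train_idx, rest_idx[val_rel], rest_idx[test_rel]
```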

5.
J Clin Pediatr Dent ; 48(2): 173-180, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38548647

ABSTRACT

One of the most common congenital anomalies of the head and neck region is cleft lip and palate. This retrospective case-control study aimed to compare maxillary sinus volumes in individuals with bilateral cleft lip and palate (BCLP) to a non-cleft control group. The study comprised 72 participants: 36 patients with BCLP and 36 gender- and age-matched control subjects. All tomographic scans were obtained using cone beam computed tomography (CBCT) for diagnostic purposes, and 3D Dolphin software was used for sinus segmentation. Volumetric measurements were taken in cubic millimeters. No significant differences were found between the sex and age distributions of the two groups. Additionally, no statistically significant difference was observed between the right and left sides in either the BCLP group or the control group (p > 0.05). However, the mean maxillary sinus volume of BCLP patients (8014.26 ± 2841.03 mm3) was significantly lower than that of the healthy control group (11,085.21 ± 3146.12 mm3) (p < 0.05). These findings suggest that clinicians should be aware of the lower maxillary sinus volumes in BCLP patients when planning surgical interventions. The use of CBCT and sinus segmentation allowed precise measurement of maxillary sinus volumes, contributing to the existing literature on anatomical variations in BCLP patients.
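
The group comparison can be reproduced from the published summary statistics, assuming an independent-samples t-test with equal variances (the abstract does not name the test used):

```python
from scipy.stats import ttest_ind_from_stats

# Reported means ± SDs in mm^3, with n = 36 participants per group
t, p = ttest_ind_from_stats(8014.26, 2841.03, 36,
                            11085.21, 3146.12, 36)
print(f"t = {t:.2f}, p = {p:.2e}")  # p is well below 0.05, as reported
```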


Subject(s)
Cleft Lip , Cleft Palate , Humans , Cleft Lip/diagnostic imaging , Cleft Palate/diagnostic imaging , Cleft Palate/surgery , Maxillary Sinus/diagnostic imaging , Retrospective Studies , Cone-Beam Computed Tomography/methods
6.
Dentomaxillofac Radiol ; 53(4): 256-266, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38502963

ABSTRACT

OBJECTIVES: The study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in cone beam computed tomography (CBCT) volumes and to evaluate the performance of this model. METHODS: In 101 CBCT scans, the MS were annotated using the CranioCatch labelling software (Eskisehir, Turkey). The dataset was divided into 3 parts: 80 CBCT scans for training the model, 11 for validation, and 10 for testing. Model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.00001 for 1000 epochs. The model's ability to automatically segment the MS on CBCT scans was assessed by several parameters, including F1-score, accuracy, sensitivity, precision, area under the curve (AUC), Dice coefficient (DC), 95% Hausdorff distance (95% HD), and intersection over union (IoU). RESULTS: The F1-score, accuracy, sensitivity, and precision values for successful segmentation of the maxillary sinus in CBCT images were 0.96, 0.99, 0.96, and 0.96, respectively. The AUC, DC, 95% HD, and IoU values were 0.97, 0.96, 1.19, and 0.93, respectively. CONCLUSIONS: Models based on nnU-Net v2 can segment the MS autonomously and accurately in CBCT images.
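
Dice and IoU are standard overlap metrics for binary masks; a minimal sketch (the 95% HD additionally requires boundary-distance computation, e.g. a percentile over surface distances, and is omitted here):

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray):
    """Dice coefficient and IoU for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou
```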


Subject(s)
Artificial Intelligence , Cone-Beam Computed Tomography , Maxillary Sinus , Cone-Beam Computed Tomography/methods , Humans , Maxillary Sinus/diagnostic imaging , Software , Female , Male , Adult
7.
BMC Oral Health ; 24(1): 155, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38297288

ABSTRACT

BACKGROUND: This retrospective study aimed to develop a deep learning algorithm for the interpretation of panoramic radiographs and to examine the performance of this algorithm in the detection of periodontal bone loss and bone loss patterns. METHODS: A total of 1121 panoramic radiographs were used in this study. Bone loss in the maxilla and mandible (total alveolar bone loss) (n = 2251), interdental bone loss (n = 25303), and furcation defects (n = 2815) were labeled using the segmentation method. In addition, interdental bone loss was divided into horizontal (n = 21839) and vertical (n = 3464) bone loss according to the defect pattern. A convolutional neural network (CNN)-based artificial intelligence (AI) system was developed using the U-Net architecture. The performance of the deep learning algorithm was statistically evaluated by confusion matrix and ROC curve analysis. RESULTS: The system showed the highest diagnostic performance in detecting total alveolar bone loss (AUC = 0.951) and the lowest in detecting vertical bone loss (AUC = 0.733). The sensitivity, precision, F1 score, accuracy, and AUC values were 1, 0.995, 0.997, 0.994, and 0.951 for total alveolar bone loss; 0.947, 0.939, 0.943, 0.892, and 0.910 for horizontal bone loss; 0.558, 0.846, 0.673, 0.506, and 0.733 for vertical bone loss; and 0.892, 0.933, 0.912, 0.837, and 0.868 for furcation defects, respectively. CONCLUSIONS: AI systems offer promising results in determining periodontal bone loss patterns and furcation defects from dental radiographs. This suggests that CNN algorithms can also be used to provide more detailed information, such as automatic determination of periodontal disease severity and treatment planning, from various dental radiographs.
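
For segmentation outputs, one common way to obtain the ROC AUC values reported above is to score each pixel's predicted probability against the ground-truth mask; the abstract does not specify whether AUC was computed pixel-wise or lesion-wise, so this is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def pixel_auc(prob_map: np.ndarray, gt_mask: np.ndarray) -> float:
    """Pixel-wise ROC AUC for one class, given a probability map
    from the model and a binary ground-truth mask."""
    return roc_auc_score(gt_mask.ravel().astype(int), prob_map.ravel())
```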


Subject(s)
Alveolar Bone Loss , Deep Learning , Furcation Defects , Humans , Alveolar Bone Loss/diagnostic imaging , Radiography, Panoramic/methods , Retrospective Studies , Furcation Defects/diagnostic imaging , Artificial Intelligence , Algorithms
8.
Odontology ; 112(2): 552-561, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37907818

ABSTRACT

The objective of this study is to use a deep learning model based on CNN architecture to detect second mesiobuccal (MB2) canals, which are seen as a variation in maxillary molar root canals. In the current study, 922 axial sections from the cone beam computed tomography (CBCT) images of 153 patients were used. The segmentation method was employed to identify the MB2 canals in maxillary molars that had not previously undergone endodontic treatment. Labeled images were divided into training (80%), validation (10%), and testing (10%) groups. The artificial intelligence (AI) model was trained using the You Only Look Once v5 (YOLOv5x) architecture with 500 epochs and a learning rate of 0.01. Confusion matrix and receiver-operating characteristic (ROC) analyses were used in the statistical evaluation of the results. The sensitivity of the MB2 canal segmentation model was 0.92, the precision was 0.83, and the F1 score was 0.87. The area under the curve (AUC) in the ROC graph of the model was 0.84. The mAP value at 0.5 intersection over union (IoU) was 0.88. The deep learning algorithm showed high success in detecting the MB2 canal. The success of endodontic treatment can be increased, and clinicians' time saved, by using newly created artificial intelligence-based models to identify variations in root canal anatomy before treatment.
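
The 0.5 IoU threshold underlying mAP@0.5 is computed per detection; a minimal box-IoU sketch:

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2); a detection with
    IoU >= 0.5 against a ground-truth box counts as a match for mAP@0.5."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```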


Subject(s)
Artificial Intelligence , Dental Pulp Cavity , Humans , Dental Pulp Cavity/diagnostic imaging , Tooth Root , Maxilla/anatomy & histology , Cone-Beam Computed Tomography/methods
9.
BMC Oral Health ; 23(1): 764, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37848870

ABSTRACT

BACKGROUND: Panoramic radiographs, in which anatomic landmarks can be observed, are used to detect cases closely related to pediatric dentistry. The purpose of this study is to investigate the success and reliability of artificial intelligence in detecting maxillary and mandibular anatomic structures observed on panoramic radiographs in children. METHODS: A total of 981 mixed images of pediatric patients were labelled for 9 different pediatric anatomic landmarks, including maxillary sinus, orbita, mandibular canal, mental foramen, foramen mandible, incisura mandible, articular eminence, and condylar and coronoid processes; training was carried out using 2D convolutional neural network (CNN) architectures for 500 training epochs, and PyTorch-implemented YOLO-v5 models were produced. The success of the AI model's predictions was tested on a 10% test dataset. RESULTS: A total of 14,804 labels were made, including maxillary sinus (1922), orbita (1944), mandibular canal (1879), mental foramen (884), foramen mandible (1885), incisura mandible (1922), articular eminence (1645), and condylar (1733) and coronoid (990) processes. The most successful F1 scores were obtained for orbita (1), incisura mandible (0.99), maxillary sinus (0.98), and mandibular canal (0.97). The best sensitivity values were obtained for orbita, maxillary sinus, mandibular canal, incisura mandible, and condylar process. The worst sensitivity values were obtained for mental foramen (0.92) and articular eminence (0.92). CONCLUSIONS: The regular and standardized labelling, the relatively larger areas, and the success of the YOLO-v5 algorithm contributed to these successful results. Automatic segmentation of these structures will save physicians time in clinical diagnosis and will increase the visibility of pathologies related to these structures and the awareness of physicians.
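
A trained YOLO-v5 checkpoint can be loaded for inference through the public ultralytics/yolov5 hub entry point; the weights filename and image path below are hypothetical stand-ins, since the study's artifacts are not public.

```python
import torch

# Load a custom-trained YOLOv5 model (hypothetical checkpoint name)
model = torch.hub.load("ultralytics/yolov5", "custom", path="landmarks_best.pt")
results = model("panoramic_child.jpg")  # detect the 9 landmark classes
results.print()                         # per-class detections and confidences
```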


Subject(s)
Anatomic Landmarks , Artificial Intelligence , Humans , Child , Radiography, Panoramic/methods , Anatomic Landmarks/diagnostic imaging , Reproducibility of Results , Mandible/diagnostic imaging
10.
J Oral Implantol ; 49(4): 344-345, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37527149
11.
Sci Prog ; 106(2): 368504231178382, 2023.
Article in English | MEDLINE | ID: mdl-37262004

ABSTRACT

OBJECTIVES: This study aimed to determine the prevalence and morphometric characteristics of the mastoid emissary canal (MEC) and mastoid foramen (MF) on cone-beam computed tomography (CBCT) images, to underline their clinical significance, and to discuss their surgical consequences. METHODS: In this retrospective analysis, two oral and maxillofacial radiologists analyzed the CBCT images of 135 patients (270 sides). The largest MF and MEC were measured on images evaluated in multiplanar reconstruction (MPR) views, and the mean MF and MEC diameters were calculated. The number of mastoid foramina was recorded. The prevalence of MF was studied according to the gender and side of the patient. RESULTS: The overall prevalence of MEC and MF was 119 (88.1%); the prevalence was 55.5% in females and 44.5% in males. MEC and MF were bilateral in 80 patients (67.20%) and unilateral in 39 patients (32.80%). The mean diameter of the MF was 2.4 ± 0.9 mm, and its mean height was 2.3 ± 0.9 mm. The mean diameter of the MEC was 2.1 ± 0.8 mm, and its mean height was 2.1 ± 0.8 mm. There was a statistically significant difference between the genders in foramen diameter (p = 0.043): males had a significantly larger mean MF diameter than females. CONCLUSION: The MEC and MF must be evaluated thoroughly if surgery is contemplated. Radiologists and surgeons should be aware of the mastoid emissary canal's morphology, variations, clinical relevance, and surgical consequences while operating in the suboccipital and mastoid areas, to avoid unexpected and catastrophic complications. CBCT may be a reliable imaging diagnostic technique.


Subject(s)
Cone-Beam Computed Tomography , Mastoid , Humans , Male , Female , Mastoid/diagnostic imaging , Mastoid/anatomy & histology , Retrospective Studies , Cone-Beam Computed Tomography/methods , Prevalence , Clinical Relevance
12.
Quintessence Int ; 54(8): 680-693, 2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37313576

ABSTRACT

OBJECTIVES: This study aimed to develop an artificial intelligence (AI) model that can automatically determine tooth numbering, frenulum attachments, gingival overgrowth areas, and gingival inflammation signs on intraoral photographs, and to evaluate the performance of this model. METHOD AND MATERIALS: A total of 654 intraoral photographs were used in the study (n = 654). All photographs were reviewed by three periodontists, and all teeth, frenulum attachments, gingival overgrowth areas, and gingival inflammation signs on the photographs were labeled using the segmentation method in web-based labeling software. In addition, tooth numbering was carried out according to the FDI system. An AI model was developed using the YOLOv5x architecture with labels of 16,795 teeth, 2,493 frenulum attachments, 1,211 gingival overgrowth areas, and 2,956 gingival inflammation signs. The confusion matrix system and ROC (receiver operating characteristic) analysis were used to statistically evaluate the success of the developed model. RESULTS: The sensitivity, precision, F1 score, and AUC (area under the curve) were 0.990, 0.784, 0.875, and 0.989 for tooth numbering; 0.894, 0.775, 0.830, and 0.827 for frenulum attachments; 0.757, 0.675, 0.714, and 0.774 for gingival overgrowth areas; and 0.737, 0.823, 0.777, and 0.802 for gingival inflammation signs, respectively. CONCLUSION: The results of the present study show that AI systems can successfully interpret intraoral photographs. These systems have the potential to accelerate digital transformation in the clinical and academic practice of dentistry through the automatic determination of anatomical structures and dental conditions from intraoral photographs.
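
The FDI system referenced above encodes each permanent tooth as a two-digit code: quadrant (1-4) then position from the midline (1-8). A small helper for illustration:

```python
def fdi_number(quadrant: int, position: int) -> int:
    """FDI two-digit notation for permanent teeth: quadrant 1-4
    (1 = upper right, 2 = upper left, 3 = lower left, 4 = lower right)
    and tooth position 1-8 counted from the midline."""
    if not (1 <= quadrant <= 4 and 1 <= position <= 8):
        raise ValueError("permanent teeth only: quadrant 1-4, position 1-8")
    return 10 * quadrant + position

print(fdi_number(2, 3))  # 23, the upper left canine
```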


Subject(s)
Gingival Overgrowth , Gingivitis , Tooth , Humans , Retrospective Studies , Artificial Intelligence , Gingivitis/diagnosis , Neural Networks, Computer , Algorithms , Inflammation
13.
Diagnostics (Basel) ; 13(10)2023 May 19.
Article in English | MEDLINE | ID: mdl-37238284

ABSTRACT

The assessment of alveolar bone loss, a crucial element of the periodontium, plays a vital role in the diagnosis of periodontitis and the prognosis of the disease. In dentistry, artificial intelligence (AI) applications have demonstrated practical and efficient diagnostic capabilities, leveraging machine learning and cognitive problem-solving functions that mimic human abilities. This study aims to evaluate the effectiveness of AI models in identifying alveolar bone loss as present or absent across different regions. To achieve this goal, alveolar bone loss models were generated using the PyTorch-based YOLO-v5 model implemented via CranioCatch software, detecting periodontal bone loss areas and labeling them using the segmentation method on 685 panoramic radiographs. Besides a general evaluation, the models were grouped according to subregions (incisors, canines, premolars, and molars) to provide a targeted evaluation. Our findings reveal that the lowest sensitivity and F1 score values were associated with total alveolar bone loss, while the highest values were observed in the maxillary incisor region. This shows that artificial intelligence has high potential in analytical studies evaluating periodontal bone loss. Given the limited amount of data, this success is expected to increase when more comprehensive datasets are used for training in further studies.

14.
J Oral Rehabil ; 50(9): 758-766, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37186400

ABSTRACT

BACKGROUND: The use of artificial intelligence has many advantages, especially in the field of oral and maxillofacial radiology. Early diagnosis of temporomandibular joint osteoarthritis by artificial intelligence may improve prognosis. OBJECTIVE: The aim of this study is to perform classification of temporomandibular joint (TMJ) osteoarthritis and TMJ segmentation on cone beam computed tomography (CBCT) sagittal images with artificial intelligence. METHODS: The success of the YOLOv5 architecture, an artificial intelligence model, in TMJ segmentation and osteoarthritis classification was evaluated on 2000 sagittal sections (500 healthy, 500 erosion, 500 osteophyte, and 500 flattening images) obtained from the CBCT DICOM images of 290 patients. RESULTS: The sensitivity, precision, and F1 scores of the model for TMJ osteoarthritis classification were 1, 0.7678, and 0.8686, respectively, and the classification accuracy was 0.7678. The classification model's prediction rates were 88% for healthy joints, 70% for flattened joints, 95% for joints with erosion, and 86% for joints with osteophytes. The sensitivity, precision, and F1 score of the YOLOv5 model for TMJ segmentation were 1, 0.9953, and 0.9976, respectively. The AUC value of the model for TMJ segmentation was 0.9723, and its accuracy was 0.9953. CONCLUSION: The artificial intelligence model applied in this study, with its successful results in TMJ segmentation and osteoarthritis classification, can serve as a support method that saves time and offers convenience to physicians in diagnosing the disease.
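
Per-class results like those above are conveniently derived from predicted and true labels with scikit-learn; the label arrays below are tiny hypothetical stand-ins, since the study's raw predictions are not public.

```python
from sklearn.metrics import classification_report, confusion_matrix

labels = ["healthy", "erosion", "osteophyte", "flattening"]
y_true = ["healthy", "erosion", "osteophyte", "flattening", "erosion", "healthy"]
y_pred = ["healthy", "erosion", "flattening", "flattening", "erosion", "healthy"]

print(confusion_matrix(y_true, y_pred, labels=labels))
print(classification_report(y_true, y_pred, labels=labels,
                            digits=3, zero_division=0))
```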


Subject(s)
Osteoarthritis , Temporomandibular Joint Disorders , Humans , Temporomandibular Joint Disorders/diagnostic imaging , Artificial Intelligence , Temporomandibular Joint/diagnostic imaging , Cone-Beam Computed Tomography/methods , Osteoarthritis/diagnostic imaging
15.
Proc Inst Mech Eng H ; 237(6): 706-718, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37211725

ABSTRACT

The morphology of the finger bones in hand-wrist radiographs (HWRs) can be considered a radiological skeletal maturity indicator, along with other indicators. This study aims to validate the anatomical landmarks envisaged for classifying the morphology of the phalanges by developing classical neural network (NN) classifiers based on a sub-dataset of 136 HWRs. A web-based tool was developed, and 22 anatomical landmarks were labeled on four regions of interest (the proximal (PP3), medial (MP3), and distal (DP3) phalanges of the third finger, and the medial phalanx (MP5) of the fifth finger); the epiphysis-diaphysis relationships were saved as "narrow", "equal", "capping", or "fusion" by three observers. In each region, 18 ratios and 15 angles were extracted using the anatomical points. The dataset was analyzed by developing two NN classifiers, without (NN-1) and with (NN-2) 5-fold cross-validation. The performance of the models was evaluated with percentage of agreement, Cohen's (cκ) and weighted (wκ) kappa coefficients, precision, recall, F1-score, and accuracy (statistical significance: p < 0.05). Method error was found to be in the range of cκ: 0.7-1. Overall classification performance of the models ranged between 82.14% and 89.29%. On average, the performance of the NN-1 and NN-2 models was 85.71% and 85.52%, respectively. The cκ and wκ of the NN-1 model ranged between -0.08 (p > 0.05) and 0.91 among regions. The average performance was found to be promising except in regions without adequate samples, and the anatomical points were validated for use in future studies.
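
Cohen's and weighted kappa for ordinal stage labels can be computed with scikit-learn; the rater arrays below are hypothetical examples, and passing the ordinal label order matters for the weighted variant.

```python
from sklearn.metrics import cohen_kappa_score

stages = ["narrow", "equal", "capping", "fusion"]  # ordinal order
# Hypothetical stage ratings from two observers for one region
rater_a = ["narrow", "equal", "capping", "fusion", "equal", "capping"]
rater_b = ["narrow", "equal", "capping", "capping", "equal", "fusion"]

print(cohen_kappa_score(rater_a, rater_b, labels=stages))                    # cκ
print(cohen_kappa_score(rater_a, rater_b, labels=stages, weights="linear"))  # wκ
```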


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Pilot Projects , Radiography , Hand
16.
Sci Prog ; 106(1): 368504231157146, 2023.
Article in English | MEDLINE | ID: mdl-36855800

ABSTRACT

OBJECTIVE: This study aimed to examine the morphological characteristics of the nasopharynx in children with unilateral cleft lip/palate (CL/P) and in non-cleft children using cone beam computed tomography (CBCT). METHODS: This retrospective study consisted of 54 patients: 27 with unilateral CL/P and 27 without CL/P. The Eustachian tube orifice (ET), Rosenmuller fossa (RF) depth, presence of a pharyngeal bursa (PB), and the distance from the posterior nasal spine (PNS) to the posterior pharyngeal wall were quantitatively evaluated. RESULTS: The main effect of CL/P group was significant for RF depth-right (p < 0.001) and RF depth-left (p < 0.001). The interaction effect of gender and CL/P group did not influence the measurements. The cleft-side main effect was significant for RF depth-left (p < 0.001) and RF depth-right (p = 0.002). There was no statistically significant relationship between the CL/P groups and the presence of a pharyngeal bursa. CONCLUSIONS: Because the nasopharynx is the most common site of nasopharyngeal carcinoma (NPC), its anatomy should be well known to support the early diagnosis of NPC.
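
The main-effect and interaction results above are consistent with a two-way factorial ANOVA (group × gender), though the abstract does not name the test; a sketch with statsmodels on synthetic stand-in data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical records; the study's raw measurements are not public.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["CLP"] * 27 + ["control"] * 27,
    "gender": rng.choice(["F", "M"], size=54),
    "rf_depth_right": rng.normal(5.0, 1.0, size=54),
})
model = smf.ols("rf_depth_right ~ C(group) * C(gender)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and the interaction term
```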


Subject(s)
Cleft Lip , Cleft Palate , Humans , Child , Cleft Palate/diagnostic imaging , Retrospective Studies , Cone-Beam Computed Tomography , Nasopharynx/diagnostic imaging
17.
Diagnostics (Basel) ; 13(4)2023 Feb 04.
Article in English | MEDLINE | ID: mdl-36832069

ABSTRACT

This study aims to develop an algorithm for the automatic segmentation of the parotid gland on CT images of the head and neck using the U-Net architecture, and to evaluate the model's performance. In this retrospective study, a total of 30 anonymized CT volumes of the head and neck were sliced into 931 axial images of the parotid glands. Ground truth labeling was performed with the CranioCatch Annotation Tool (CranioCatch, Eskisehir, Turkey) by two oral and maxillofacial radiologists. The images were resized to 512 × 512 and split into training (80%), validation (10%), and testing (10%) subgroups. A deep convolutional neural network model was developed using the U-Net architecture. Automatic segmentation performance was evaluated in terms of F1-score, precision, sensitivity, and area under the curve (AUC) statistics. A segmentation was deemed successful when more than 50% of its pixels intersected the ground truth. The F1-score, precision, and sensitivity of the AI model in segmenting the parotid glands in the axial CT slices were all 1, and the AUC value was 0.96. This study has shown that AI models based on deep learning can automatically segment the parotid gland on axial CT images.
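
One reading of the stated success criterion, as a minimal check (the abstract leaves ambiguous whether the 50% refers to predicted or ground-truth pixels; ground-truth coverage is assumed here):

```python
import numpy as np

def segmentation_successful(pred: np.ndarray, gt: np.ndarray) -> bool:
    """True when more than 50% of the ground-truth pixels are covered
    by the predicted mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return np.logical_and(pred, gt).sum() / gt.sum() > 0.5
```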

18.
Oral Radiol ; 39(1): 207-214, 2023 01.
Article in English | MEDLINE | ID: mdl-35612677

ABSTRACT

OBJECTIVES: Artificial intelligence (AI) techniques such as convolutional neural networks (CNNs) are a promising breakthrough that can help clinicians analyze medical imaging, diagnose taurodontism, and make therapeutic decisions. The purpose of this study is to develop and evaluate a CNN-based AI model for diagnosing teeth with taurodontism in panoramic radiography. METHODS: 434 anonymized, mixed-size panoramic radiography images of patients over the age of 13 years were used to develop automatic taurodont tooth segmentation models using a PyTorch-implemented U-Net model. The dataset was split into training, validation, and test groups of both normal and masked images. Data augmentation was applied to the training and validation groups with vertically flipped, horizontally flipped, and both-axes-flipped images. A confusion matrix was used to determine model performance. RESULTS: Among the 43 test group images with 126 labels, there were 109 true positives, 29 false positives, and 17 false negatives. The sensitivity, precision, and F1-score for taurodont tooth segmentation were 0.8650, 0.7898, and 0.8257, respectively. CONCLUSIONS: The CNN produced results almost identical to the labeled training data, achieving close to expert-level performance in detecting taurodontism.
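
A minimal sketch of the flip augmentation described, using torchvision's functional transforms; for segmentation, the label mask must be flipped identically to keep annotations aligned (the study's exact pipeline is not published).

```python
import torchvision.transforms.functional as TF

def flip_variants(image, mask):
    """Original plus horizontally, vertically, and both-axes flipped
    copies of an image and its segmentation mask."""
    h_img, h_msk = TF.hflip(image), TF.hflip(mask)
    return [
        (image, mask),
        (h_img, h_msk),
        (TF.vflip(image), TF.vflip(mask)),
        (TF.vflip(h_img), TF.vflip(h_msk)),
    ]
```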


Subject(s)
Artificial Intelligence , Deep Learning , Radiography, Panoramic , Neural Networks, Computer , Algorithms
19.
J Stomatol Oral Maxillofac Surg ; 124(1): 101264, 2023 02.
Article in English | MEDLINE | ID: mdl-35964938

ABSTRACT

INTRODUCTION: Deep learning methods have recently been applied to the processing of medical images and have shown promise in a variety of applications. This study aimed to develop a deep learning approach for identifying oral lichen planus lesions using photographic images. MATERIAL AND METHODS: Anonymized retrospective photographic images of buccal mucosa, 65 healthy and 72 with oral lichen planus lesions, were identified using the CranioCatch program (CranioCatch, Eskisehir, Turkey). All images were re-checked and verified by oral medicine and maxillofacial radiology experts. This dataset was divided into training (n = 51; n = 58), validation (n = 7; n = 7), and test (n = 7; n = 7) sets for healthy mucosa and mucosa with oral lichen planus lesions, respectively. An artificial intelligence model was developed using the Google Inception V3 architecture implemented with TensorFlow, a deep learning framework. RESULTS: The AI deep learning model classified all test images of both healthy and diseased mucosa with a 100% success rate. CONCLUSION: AI offers a wide range of uses and applications in healthcare. Increased workload, increased complexity of the job, and probable doctor fatigue may jeopardize diagnostic ability and results; artificial intelligence (AI) components in imaging equipment would lessen this effort and increase efficiency. They can also detect oral lesions and draw on more data than their human counterparts. Our preliminary findings show that deep learning has the potential to handle this significant challenge.
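
A minimal transfer-learning sketch with Inception V3 in Keras for the two-class task (healthy mucosa vs. oral lichen planus); the frozen base, head layers, and optimizer are illustrative assumptions, not the study's exact configuration.

```python
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # reuse ImageNet features on a small dataset

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # lesion probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```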


Subject(s)
Deep Learning , Lichen Planus, Oral , Humans , Lichen Planus, Oral/diagnosis , Lichen Planus, Oral/pathology , Retrospective Studies , Artificial Intelligence , Algorithms
20.
Diagnostics (Basel) ; 12(12)2022 Dec 07.
Article in English | MEDLINE | ID: mdl-36553088

ABSTRACT

The large number of archived digital images makes it easy for radiology to provide data for artificial intelligence (AI) evaluation, and AI algorithms are increasingly applied to disease detection. The aim of this study is to perform a diagnostic evaluation of periapical radiographs with an AI model based on convolutional neural networks (CNNs). The dataset includes 1169 adult periapical radiographs, which were labelled in CranioCatch annotation software. Deep learning was performed using the U-Net model implemented with the PyTorch library. The deep learning-based AI model successfully segmented carious lesions, crowns, dental pulp, dental fillings, periapical lesions, and root canal fillings in periapical images. The sensitivity, precision, and F1 scores were 0.82, 0.82, and 0.82 for carious lesions; 1, 1, and 1 for crowns; 0.97, 0.87, and 0.92 for dental pulp; 0.95, 0.95, and 0.95 for fillings; 0.92, 0.85, and 0.88 for periapical lesions; and 1, 0.96, and 0.98 for root canal fillings, respectively. The success of AI algorithms in evaluating periapical radiographs is encouraging and promising for their use in routine clinical processes as a clinical decision support system.
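
U-Net segmentation models are commonly trained with a soft Dice objective; the study does not state its loss function, so the sketch below is an assumption for illustration.

```python
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for one segmentation class: 1 minus the Dice
    overlap between predicted probabilities and the binary target."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
```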
